
Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware



Abstract

In recent years, the field of neuromorphic low-power systems, which consume orders of magnitude less power, has gained significant momentum. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, efficient processing of temporal sequences or variable-length inputs remains difficult. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a train-and-constrain methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while being compatible with the capabilities of current and near-future neuromorphic systems. This "train-and-constrain" method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), where we show the entire mapping process of the recurrent layer of the network on IBM's Neurosynaptic System "TrueNorth", a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights and neural activities to 16 levels, and to limit fan-in to 64 inputs. We find that short synaptic delays are sufficient to implement the dynamical (temporal) aspect of the RNN in the question classification task. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of ~17 µW.
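The two hardware constraints mentioned in the abstract (16-level weight discretization and a fan-in of 64 inputs) can be sketched as simple post-training transformations of a trained weight matrix. This is a minimal illustration, not the paper's actual conversion code; the function names and the uniform quantization scheme are assumptions for the sketch.

```python
import numpy as np

def discretize_weights(w, levels=16):
    """Uniformly quantize a weight matrix to a fixed number of levels.

    Maps weights in [-w_max, w_max] onto `levels` evenly spaced values.
    (Illustrative scheme; the paper's exact discretization may differ.)
    """
    w_max = np.max(np.abs(w))
    if w_max == 0:
        return w.copy()
    # Integer codes 0..levels-1, then map back to the weight range.
    q = np.round((w + w_max) / (2 * w_max) * (levels - 1))
    return q / (levels - 1) * 2 * w_max - w_max

def limit_fan_in(w, max_fan_in=64):
    """Keep only the `max_fan_in` largest-magnitude incoming weights
    per output neuron (one row of w), zeroing the rest."""
    w = w.copy()
    for row in w:                       # each row = one neuron's inputs
        if np.count_nonzero(row) > max_fan_in:
            drop = np.argsort(np.abs(row))[:-max_fan_in]
            row[drop] = 0.0
    return w
```

In practice such constraints are applied after backpropagation-through-time training, and the network is then fine-tuned or its spiking-neuron parameters matched so that accuracy loss from quantization and pruning stays small.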


